Search Results: "metal"

28 April 2020

Antoine Beaupré: Drowned my camera: dealing with liquid spills in electronics

Folks who dig deep into this website might know that I have been taking more pictures recently, since I got a new camera in January 2018: a beautiful Fujifilm X-T2 that I really like. Recently, I went out on a photo shoot in the rain. It was intermittent, light rain when I left, so I figured the "weather proofing" (dpreview.com calls this "environmentally sealed") would keep the camera safe. After an hour of walking outside, however, the rain intensified and I was quickly becoming more and more soaked. Still trusting the camera would function, I carried on. But after about 90 minutes of dutiful work, the camera just turned off and wouldn't power back on. It had drowned. I couldn't believe it; "but this is supposed to be waterproof! This can't be happening!", I thought. I tried swapping out the battery for a fresh one, which was probably a bad idea (even if I was smart enough to do this under cover): still no luck. Yet I could still not believe it was dead, so I figured I would look at it later when I was home. I did eventually remove the battery after a while, remembering that it mattered. Turns out the camera was really dead. Even at home, it wouldn't power up, even with fresh batteries. After closer inspection, the camera was as soaked as I was...
Two Sandisk memory cards with water droplets on them ...even the SD cards were wet!
I was filled with despair! My precious camera! I had been waiting for literally decades to find the right digital camera, one that came as close as possible to the good old film cameras I was used to. I was even working on black and white "film" to get back to basics, which turned into a project to witness the impact of the coronavirus on city life! All that was lost, or at least stopped: amazingly, the SD cards were just absolutely fine and survived the flooding without problem.
A one-way sign broken, fallen on the side in a gray cityscape The last photo my camera took before it died
A good photographer friend told me that this was actually fairly common: "if you shoot outside, get used to this, it will happen". So I tried "the rice trick": plunge your camera in a pile of rice and let it rest there for a long time. It didn't work so well: I didn't have a big enough container to hold the camera and the rice. I was also worried about rice particles inserting themselves into the camera's openings, as I had opened all the ports to let it dry. I also could not keep myself from inserting a battery and trying it out again: amazingly, it powered up, but only once, and then died again. After shopping in desperation for desiccants (who would have thought you should keep those little bags from the stuff you order online!), I ended up buying a silica gel dehumidifier from Lee Valley ($13, the small one!) which comes in a neat little metal box. But it didn't arrive in time, so I had to find another solution. My partner suggested it in jest, but it turned out to be the actual solution, and it was surprisingly simple!
My camera and lens drying in a food dehydrator, at 30°C with 22 hours left. Tada! Turns out you can dehydrate hardware too!
We have a food dehydrator at home (a Sedna Express, if you really want to know) since we do a lot of backpacking and canoe camping, but I never thought I would put electronics in there. Turns out a food dehydrator is perfect: it has per-degree temperature control that can go very low, and a timer. I set it to 30°C for 24 hours. (I originally set it to 40°C, but it smelled like plastic after a while, so my partner turned it off thinking it was melting the camera.) And now the camera is back! I was so happy! There is probably some permanent damage to the delicate circuitry in the camera. And I will probably not go back out in heavy rain again with the camera, or at least not without a rain jacket ($35 USD at B&H) on the camera. And I am now in a position to tell other people what to do if they suffer the same fate...

Tips for dealing with electronic liquid damage

So, lessons learned...
  1. when you have a liquid spill over your electronics: IMMEDIATELY REMOVE ALL ELECTRIC POWER, including the battery! (this is another reason why all batteries should be removable)
  2. if the spill is "sticky" (e.g. coffee, beer, maple syrup, etc.) or "salty", do try to wash it with water, yet without flooding it any further (delicate balance, I know). some devices are especially well adapted to this: I have washed a keyboard with a shower head and drowned the thing completely, and it worked fine after drying.
  3. do NOT power it back on until you are certain the equipment is dry
  4. let the electronic device dry for 24 to 48 hours with all ports open in a humidity-absorbing environment: a bag of rice works, but a food dehydrator is best. make sure the rice doesn't get stuck inside the machine: use a small mesh bag if necessary
  5. once you are confident the device has dried, fiddle with the controls and see if water comes out: it might not have dried because it was stuck inside a button or dial. if dry, try powering it back on and watch the symptoms. if it's still weird, try drying it for another day.
  6. if you get tired of waiting and the machine doesn't come back up, you will have to send it to the repair shop or open it up yourself to see if there is soldering damage you can fix.
I hope it might help careless people who dropped their coffee or ran out in the rain, believing the hype of waterproof cameras. Amateur tip: waterproof cameras are not waterproof...

1 April 2020

Joey Hess: DIN distractions

My offgrid house has an industrial automation panel.

A row of electrical devices, mounted on a metal rail. Many wires neatly extend from it above and below, disappearing into wire gutters.

I started building this in February, before covid-19 was impacting us here, when lots of mail orders were no big problem, and getting an unusual 3D-printed DIN rail bracket for an SSD was just a couple clicks. I finished a month later, deep into social isolation and quarantine, scrounging around the house for scrap wire, scavenging screws from unused stuff and cutting them to size, and hoping I would not end up in a "need just one more part that I can't get" situation. It got rather elaborate, and working on it was often a welcome distraction from the news when I couldn't concentrate on my usual work. I'm posting this now because people sometimes tell me they like hearing about my offgrid stuff, and perhaps you could use a distraction too.

The panel has my house's computer on it, as well as both AC and DC power distribution, breakers, and switching. Since the house is offgrid, the panel is designed to let every non-essential power drain be turned off, from my offgrid fridge to the 20 terabytes of offline storage to the inverter and satellite dish, the spring pump for my gravity flow water system, and even the power outlet by the kitchen sink. Saving power is part of why I'm using old-school relays and stuff and not IOT devices; the other reason is of course that IOT devices are horrible dystopian e-waste. I'm taking the utopian Star Trek approach, where I can command "full power to the vacuum cleaner!"

Two circuit boards, connected by numerous ribbon cables, and clearly hand-soldered. The smaller board is suspended above the larger. An electrical schematic, of moderate complexity.

At the core of the panel, next to the cubietruck arm board, is a custom IO daughterboard. Designed and built by hand to fit into a DIN mount case, it uses every GPIO pin on the cubietruck's main GPIO header. Making this board took 40+ hours, and was about half the project. It got pretty tight in there.

This was my first foray into DIN rail mount, and it really is industrial lego -- a whole universe of parts that all fit together and are immensely flexible. Often priced more than seems reasonable for a little bit of plastic and metal, until you look at the spec sheets and the ratings. (Total cost for my panel was $400.) It's odd that it's not more used outside its niche -- I came of age in the Bay Area, surrounded by rack mount equipment, but no DIN mount equipment. Hacking the hardware in a rack is unusual, but DIN invites hacking. Admittedly, this is a second system kind of project, replacing some unsightly shelves full of gear and wires everywhere with something kind of overdone. But it should be worth it in the long run as new gear gets clipped into place and it evolves for changing needs. Also, wire gutters, where have you been all my life?

A cramped utility room with an entire wall covered with electronic gear, including the DIN rail, which is surrounded by wire gutters. Detail of a wire gutter with the cover removed. Numerous large and small wires run along it and exit here and there.

Finally, if you'd like to know what everything on the DIN rail is, from left to right: Ground block, 24v DC disconnect, fridge GFI, spare GFI, USB hub switch, computer switch, +24v block, -24v block, IO daughterboard, 1tb SSD, arm board, modem, 3 USB hubs, 5 relays, AC hot block, AC neutral block, DC-DC power converters, humidity sensor.
Full width of DIN rail.

20 March 2020

Louis-Philippe Véronneau: Today (March 20th 2020) is the day to buy music on Bandcamp

Hey folks, This is a quick blog post to tell you Bandcamp is waiving all their fees on March 20th 2020 (PST). Spread the word, as every penny spent on the platform that day will go back to the artists. COVID-19 is throwing us all a mean curveball and artists have it especially rough, particularly those who were in the middle of tours and had to cancel them. If you like Metal, Angry Metal Guy posted a very nice list of artists you might know and want to help out. If you are lucky enough to have a little coin around, now is the time to show support for the artists you like. Buy an album you liked and copied from a friend or get some merch to wear to your next virtual beer night with your (remote) friends! Stay safe and don't forget to wash your hands regularly.

19 October 2017

Steinar H. Gunderson: Introducing Narabu, part 2: Meet the GPU

Narabu is a new intraframe video codec. You may or may not want to read part 1 first. The GPU, despite being far more flexible than it was fifteen years ago, is still a very different beast from your CPU, and not all problems map well to it performance-wise. Thus, before designing a codec, it's useful to know what our platform looks like. A GPU has lots of special functionality for graphics (well, duh), but we'll be concentrating on the compute shader subset in this context, i.e., we won't be drawing any polygons. Roughly, a GPU (as I understand it!) is built up about as follows:
  • A GPU contains 1-20 cores; NVIDIA calls them SMs (shader multiprocessors), Intel calls them subslices. (Trivia: a typical mid-range Intel GPU contains two cores, and thus is designated GT2.) One such core usually runs the same program, although on different data; there are exceptions, but typically, if your program can't fill an entire core with parallelism, you're wasting energy.
  • Each core, in addition to tons (thousands!) of registers, also has some shared memory (also called local memory sometimes, although that term is overloaded), typically 32-64 kB, which you can think of in two ways: either as a sort-of explicit L1 cache, or as a way to communicate internally on a core. Shared memory is a limited, precious resource in many algorithms.
  • Each core/SM/subslice contains about 8 execution units (Intel calls them EUs, NVIDIA/AMD call them something else) and some memory access logic. These multiplex a bunch of threads (say, 32) and run them in a round-robin-ish fashion. This means that a GPU can handle memory stalls much better than a typical CPU, since it has so many streams to pick from; even though each thread runs in-order, it can just kick off an operation and then go to the next thread while the previous one is working.
  • Each execution unit has a bunch of ALUs (typically 16) and executes code in a SIMD fashion. NVIDIA calls these ALUs "CUDA cores", AMD calls them "stream processors". Unlike on a CPU, this SIMD has full scatter/gather support (although sequential access, especially in certain patterns, is much more efficient than random access), lane enable/disable so it can work with conditional code, etc. The typically fastest operation is a 32-bit float muladd; usually that's single-cycle. GPUs love 32-bit FP code. (In fact, in some GPU languages, you won't even have 8-, 16-bit or 64-bit types. This is annoying, but not the end of the world.) The vectorization is not exposed to the user in typical code (GLSL has some vector types, but they're usually just broken up into scalars, so that's a red herring), although in some programming languages you can get to swizzle the SIMD stuff internally to gain advantage of that (there are also schemes for broadcasting bits by voting etc.). However, it is crucially important to performance; if you have divergence within a warp, the GPU needs to execute both sides of the if, so less divergent code is good. Such a SIMD group is called a warp by NVIDIA (I don't know if the others have names for it). NVIDIA has SIMD/warp width always 32; AMD used to be 64 but is now 16. Intel supports 4-32 (the compiler will autoselect based on a bunch of factors), although 16 is the most common.
The upshot of all of this is that you need massive amounts of parallelism to be able to get useful performance out of a GPU.
A rule of thumb is that if you could have launched about a thousand threads for your problem on CPU, it's a good fit for a GPU, although this is of course just a guideline. There's a ton of APIs available to write compute shaders. There's CUDA (NVIDIA-only, but the dominant player), D3D compute (Windows-only, but multi-vendor), OpenCL (multi-vendor, but highly variable implementation quality), OpenGL compute shaders (all platforms except macOS, which has too old drivers), Metal (Apple-only) and probably some that I forgot. I've chosen to go for OpenGL compute shaders since I already use OpenGL shaders a lot, and this saves on interop issues. CUDA probably is more mature, but my laptop is Intel. :-) No matter which one you choose, the programming model looks very roughly like this pseudocode:
for (size_t workgroup_idx = 0; workgroup_idx < NUM_WORKGROUPS; ++workgroup_idx) {  // in parallel over cores
        char shared_mem[REQUESTED_SHARED_MEM];  // private for each workgroup
        for (size_t local_idx = 0; local_idx < WORKGROUP_SIZE; ++local_idx) {  // in parallel on each core
                main(workgroup_idx, local_idx, shared_mem);
        }
}
except in reality, the indices will be split in x/y/z for your convenience (you control all six dimensions, of course), and if you haven't asked for too much shared memory, the driver can silently make larger workgroups if it helps increase parallelism (this is totally transparent to you). main() doesn't return anything, but you can do reads and writes as you wish; GPUs have large amounts of memory these days, and staggering amounts of memory bandwidth.

Now for the bad part: generally, you will have no debuggers, no way of logging and no real profilers (if you're lucky, you can get to know how long each compute shader invocation takes, but not what takes time within the shader itself). Especially the latter is maddening; the only real recourse you have is some timers, and then placing timer probes or trying to comment out sections of your code to see if something goes faster. If you don't get the answers you're looking for, forget printf; you need to set up a separate buffer, write some numbers into it and pull that buffer back from the GPU. Profilers are an essential part of optimization, and I had really hoped the world would be more mature here by now. Even CUDA doesn't give you all that much insight; sometimes I wonder if all of this is because GPU drivers and architectures are meant to be shrouded in mystery for competitiveness reasons, but I'm honestly not sure.

So that's it for a crash course in GPU architecture. Next time, we'll start looking at the Narabu codec itself.
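To make the programming model above a bit more tangible, here is what a minimal OpenGL compute shader could look like. This is a sketch of my own and not taken from Narabu; the buffer binding, the workgroup size of 64 and the trivial doubling operation are all arbitrary choices for illustration:

#version 430
// One workgroup corresponds to one iteration of the outer loop in the
// pseudocode above; one local invocation corresponds to the inner loop.
layout(local_size_x = 64) in;

// A shader storage buffer this shader reads from and writes to.
layout(std430, binding = 0) buffer Data {
    float values[];
};

void main()
{
    uint idx = gl_GlobalInvocationID.x;  // workgroup index * 64 + local index
    if (idx < uint(values.length())) {
        values[idx] *= 2.0;              // trivial per-element work
    }
}

You would compile this like any other GLSL shader, bind a buffer to binding point 0 and kick it off with glDispatchCompute(num_workgroups, 1, 1); the x/y/z split mentioned above is exactly the three arguments of that call.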

12 October 2017

Joachim Breitner: Isabelle functions: Always total, sometimes undefined

Often, when I mention how things work in the interactive theorem prover Isabelle/HOL (in the following just Isabelle¹) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of what a function is, whether functions are total, and what the business with undefined is. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians. Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.

HOL is a logic of total functions

If I have an Isabelle function f :: a ⇒ b between two types a and b (the function arrow in Isabelle is ⇒, not →), then, by definition of what it means to be a function in HOL, whenever I have a value x :: a, the expression f x (i.e. f applied to x) is a value of type b. Therefore, and without exception, every Isabelle function is total. In particular, it cannot be that f x does not exist for some x :: a. This is a first difference from Haskell, which does have partial functions like
spin :: Maybe Integer -> Bool
spin (Just n) = spin (Just (n+1))
Here, neither the expression spin Nothing nor the expression spin (Just 42) produces a value of type Bool: the former raises an exception ("incomplete pattern match"), the latter does not terminate. Confusingly, though, both expressions have type Bool. Because every function is total, this confusion cannot arise in Isabelle: if an expression e has type t, then it is a value of type t. This trait is shared with other total systems, including Coq. Did you notice the emphasis I put on the word "is" here, and how I deliberately did not write "evaluates to" or "returns"? This is because of another big source of confusion:

Isabelle functions do not compute

We (i.e., functional programmers) stole the word "function" from mathematics and repurposed it². But the word "function", in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind. What is the difference?
  • A function a → b in functional programming is an algorithm that, given a value of type a, calculates (returns, evaluates to) a value of type b.
  • A function a ⇒ b in math (or Isabelle) associates with each value of type a a value of type b.
For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:
definition foo :: "(nat ⇒ real) ⇒ real" where
  "foo seq = (if convergent seq then lim seq else 0)"
This assigns a real number to every sequence, but it does not compute it in any useful sense. From this it follows that

Isabelle functions are specified, not defined

Consider this function definition:
fun plus :: "nat ⇒ nat ⇒ nat" where
   "plus 0       m = m"
 | "plus (Suc n) m = Suc (plus n m)"
To a functional programmer, this reads
plus is a function that analyses its first argument. If that is 0, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one.
which is clearly a description of a computation. But to Isabelle, the above reads
plus is a binary function on natural numbers, and it satisfies the two equations given above.
And in fact, it is not so much Isabelle that reads it this way, but rather the fun command, which is external to the Isabelle logic. The fun command analyses the given equations, constructs a non-recursive definition of plus under the hood, passes that to Isabelle and then proves that the given equations hold for plus. One interesting consequence of this is that different specifications can lead to the same functions. In fact, if we were to define plus' by recursing on the second argument, we'd obtain the same function (i.e. plus = plus' is a theorem, and there would be no way of telling the two apart).
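For concreteness, such a plus', recursing on the second argument, might be specified like this (a sketch of my own following the description above, not taken from the original post):

fun plus' :: "nat ⇒ nat ⇒ nat" where
   "plus' n 0       = n"
 | "plus' n (Suc m) = Suc (plus' n m)"

The fun command runs the same machinery for this specification, and the function it constructs is, as claimed above, extensionally equal to plus.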

Termination is a property of specifications, not functions

Because a function does not evaluate, it does not make sense to ask whether it terminates. The question of termination arises before the function is defined: the fun command can only construct plus in a way that the equations hold if it passes a termination check, very much like Fixpoint in Coq. But while the termination check of Fixpoint in Coq is a deep part of the basic logic, in Isabelle it is simply something that this particular command requires for its internal machinery to go through. At no point does a termination proof of the function exist as a theorem inside the logic. And other commands may have other means of defining a function that do not even require such a termination argument! For example, a function specification that is tail-recursive can be turned into a function, even without a termination proof: the following definition describes a higher-order function that iterates its first argument f on the second argument x until it finds a fixpoint. It is completely polymorphic (the single quote in 'a indicates that this is a type variable):
partial_function (tailrec)
  fixpoint :: "('a ⇒ 'a) ⇒ 'a ⇒ 'a"
where
  "fixpoint f x = (if f x = x then x else fixpoint f (f x))"
We can work with this definition just fine. For example, if we instantiate f with (λx. x - 1), we can prove that it will always return 0:
lemma "fixpoint (λn. n - 1) (n::nat) = 0"
  by (induction n) (auto simp add: fixpoint.simps)
Similarly, if we have a function that works within the option monad (i.e. Maybe in Haskell), its specification can always be turned into a function without an explicit termination proof; here is one that calculates the Collatz sequence:
partial_function (option) collatz :: "nat ⇒ nat list option"
 where "collatz n =
        (if n = 1 then Some [n]
         else if even n
           then do { ns <- collatz (n div 2);    Some (n # ns) }
           else do { ns <- collatz (3 * n + 1);  Some (n # ns) })"
Note that lists in Isabelle are finite (like in Coq, unlike in Haskell), so this function returns a list only if the Collatz sequence eventually reaches 1. I expect these definitions to make a Coq user very uneasy. How can fixpoint be a total function? What is fixpoint (λn. n + 1)? What if we run collatz n for an n where the Collatz sequence does not reach 1?³ We will come back to that question after a little detour.

HOL is a logic of non-empty types

Another big difference between Isabelle and Coq is that in Isabelle, every type is inhabited. Just like the totality of functions, this is a very fundamental fact about what HOL defines to be a type. Isabelle gets away with that design because in Isabelle, we do not use types for propositions (like we do in Coq), so we do not need empty types to denote false propositions. This design has an important consequence: it allows the existence of a polymorphic expression that inhabits any type, namely
undefined :: 'a
The naming of this term alone has caused a great deal of confusion for Isabelle beginners, or in communication with users of different systems, so I implore you to not read too much into the name. In fact, you will have a better time if you think of it as "arbitrary" or, even better, "unknown". Since undefined can be instantiated at any type, we can instantiate it for example at bool, and we can observe an important fact: undefined is not an extra value besides the "usual" ones. It is simply some value of that type, which is demonstrated in the following lemma:
lemma "undefined = True ∨ undefined = False" by auto
In fact, if the type has only one value (such as the unit type), then we know the value of undefined for sure:
lemma "undefined = ()" by auto
It is very handy to be able to produce an expression of any type, as we will see in what follows.

Partial functions are just underspecified functions

For example, it allows us to translate incomplete function specifications. Consider this definition, Isabelle's equivalent of Haskell's partial fromJust function:
fun fromSome :: "'a option ⇒ 'a" where
  "fromSome (Some x) = x"
This definition is accepted by fun (albeit with a warning), and the generated function fromSome behaves exactly as specified: when applied to Some x, it is x. The term fromSome None is also a value of type 'a; we just do not know which one it is, as the specification does not address that. So fromSome None behaves just like undefined above, i.e. we can prove
lemma "fromSome None = False ∨ fromSome None = True" by auto
Here is a small exercise for you: Can you come up with an explanation for the following lemma:
fun constOrId :: "bool ⇒ bool" where
  "constOrId True = True"
lemma "constOrId = (λ_. True) ∨ constOrId = (λx. x)"
  by (metis (full_types) constOrId.simps)
Overall, this behavior makes sense if we remember that function definitions in Isabelle are not really definitions, but rather specifications. And a partial function definition is simply an underspecification. The resulting function is simply any function that fulfills the specification, and the two lemmas above underline that observation.

Nonterminating functions are also just underspecified

Let us return to the puzzle posed by fixpoint above. Clearly, the function, seen as a functional program, is not total: when passed the argument (λn. n + 1) or (λb. ¬ b) it will loop forever trying to find a fixed point. But Isabelle functions are not functional programs, and the definitions are just specifications. What does the specification say about the case when f has no fixed point? It states that the equation fixpoint f x = fixpoint f (f x) holds. And this equation has a solution, for example fixpoint f _ = undefined. Or more concretely: the specification of the fixpoint function states that fixpoint (λb. ¬ b) True = fixpoint (λb. ¬ b) False has to hold, but it does not specify which particular value (True or False) it should denote; any is fine.

Not all function specifications are ok

At this point you might wonder: can I just specify any equations for a function f and get a function out of that? But rest assured: that is not the case. For example, no Isabelle command allows you to define a function bogus :: () ⇒ nat with the equation bogus () = Suc (bogus ()), because this equation does not have a solution. We can actually prove that such a function cannot exist:
lemma no_bogus: "∄ bogus. bogus () = Suc (bogus ())" by simp
(Of course, not_bogus () = not_bogus () is just fine.)

You cannot reason about partiality in Isabelle

We have seen that there are many ways to define functions that one might consider "partial". Given a function, can we prove that it is not partial in that sense? Unfortunately, but unavoidably, no: since undefined is not a separate, recognizable value, but rather simply an unknown one, there is no way of stating that "a function result is not specified". Here is an example that demonstrates this: two partial functions (one with not all cases specified, the other one with a self-referential specification) are indistinguishable from the total variant:
fun partial1 :: "bool ⇒ unit" where
  "partial1 True = ()"
partial_function (tailrec) partial2 :: "bool ⇒ unit" where
  "partial2 b = partial2 b"
fun total :: "bool ⇒ unit" where
  "total True = ()"
  "total False = ()"
lemma "partial1 = total ∧ partial2 = total" by auto
If you really do want to reason about partiality of functional programs in Isabelle, you should consider implementing them not as plain HOL functions, but rather use HOLCF, where you can give equational specifications of functional programs and obtain continuous functions between domains. In that setting, partial1 and partial2 are no longer equal to total. We have done that to verify some of HLint's equations.

You can still compute with Isabelle functions

I hope by this point I have not scared away anyone who wants to use Isabelle for functional programming, and in fact, you can use it for that. If the equations that you pass to fun are a reasonable definition for a function (in the programming sense), then these equations, used as rewriting rules, will allow you to compute that function quite like you would in Coq or Haskell. Moreover, Isabelle supports code extraction: you can take the equations of your Isabelle functions and have them exported into OCaml, Haskell, Scala or Standard ML. See CoCon for a conference management system with confidentiality verified in Isabelle. While these usually are the equations you defined the function with, they don't have to be: you can declare other proved equations to be used for code extraction, e.g. to refine your elegant definitions to performant ones. Like with code extraction from Coq to, say, Haskell, the adequacy of the translation rests on a moral reasoning foundation. Unlike extraction from Coq, where you have an (unformalized) guarantee that the resulting Haskell code is terminating, you do not get that guarantee from Isabelle. Conversely, this allows you to reason about and extract non-terminating programs, like fixpoint, which is not possible in Coq. There is currently ongoing work on verified code generation, where the code equations are reflected into a deep embedding of HOL in Isabelle that would allow explicit termination proofs.
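As a small illustration of both points (hedged, since the exact option syntax of the code generator varies between Isabelle releases), evaluating and exporting the plus function from above might look roughly like this:

value "plus (Suc 0) (Suc (Suc 0))"  (* evaluated via the code equations *)

export_code plus in Haskell module_name Plus file_prefix plus

The exported Haskell module then contains an ordinary recursive function derived from the code equations of plus.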

Conclusion

We have seen how in Isabelle, every function is total. Function declarations have equations, but these do not define the function in a computational sense, but rather specify it. Because in HOL there are no empty types, many specifications that appear partial (incomplete patterns, non-terminating recursion) have solutions in the space of total functions. Partiality in the specification is no longer visible in the final product.

PS: Axiom undefined in Coq

This section is speculative, and an invitation for discussion. Coq already distinguishes between types used in programs (Set) and types used in proofs (Prop). Could Coq ensure that every t : Set is non-empty? I imagine this would require additional checks in the Inductive command, similar to the checks that the Isabelle command datatype has to perform⁴, and it would disallow Empty_set. If so, then it would be sound to add the following axiom
Axiom undefined : forall (a : Set), a.
wouldn't it? This axiom does not have any computational meaning, but that seems to be ok for optional Coq axioms, like classical reasoning or function extensionality. With this in place, how much of what I describe above about function definitions in Isabelle could now be done soundly in Coq? Certainly pattern matches would not have to be complete and could sport an implicit case _ => undefined. Would it help with non-obviously terminating functions? Would it allow a Coq command Tailrecursive that accepts any tail-recursive function without a termination check?

  1. Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.
  2. Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.
  3. Let me know if you find such an n. Besides n = 0.
  4. Like fun, the constructions by datatype are not part of the logic, but create a type definition from more primitive notions that is isomorphic to the specified data type.

12 June 2017

Sven Hoexter: UEFI PXE preseeded Debian installation on HPE DL120

We bought a bunch of very cheap low end HPE DL120 servers. Enough to warrant a completely automated installation setup. Shouldn't be that much of a deal, right? Get dnsmasq up and running, feed it a preseed.cfg and be done with it. In practice it took us more hours than we expected.

Setting up the hardware

Our hosts are equipped with an additional 10G dual port NIC and we'd like to use this NIC for PXE booting. That's possible, but it requires you to switch to UEFI boot. Actually it enables you to boot from any available NIC.

Setting up dnsmasq

We decided to just use the packaged debian-installer from jessie and do some ugly things like overwriting files in /usr/lib via ansible later on. So first of all install debian-installer-8-netboot-amd64 and dnsmasq, then enroll our additional config for dnsmasq; ours looks like this:
domain=int.foobar.example
dhcp-range=192.168.0.240,192.168.0.242,255.255.255.0,1h
dhcp-boot=bootnetx64.efi
pxe-service=X86-64_EFI, "Boot UEFI PXE-64", bootnetx64.efi
enable-tftp
tftp-root=/usr/lib/debian-installer/images/8/amd64/text
dhcp-option=3,192.168.0.1
dhcp-host=00:c0:ff:ee:00:01,192.168.0.123,foobar-01
Now you have to link /usr/lib/debian-installer/images/8/amd64/text/bootnetx64.efi to /usr/lib/debian-installer/images/8/amd64/text/debian-installer/amd64/bootnetx64.efi. That got us off the ground and we had a working UEFI PXE boot that got us into debian-installer.

Feeding d-i the preseed file

Next we added some grub.cfg settings and parameterized some basic stuff to be handed over to d-i via the kernel command line. You'll find the correct grub.cfg in /usr/lib/debian-installer/images/8/amd64/text/debian-installer/amd64/grub/grub.cfg. We added the following two lines to automate the start of the installer:
set default="0"
set timeout=5
and our kernel command line looks like this:
 linux    /debian-installer/amd64/linux vga=788 --- auto=true interface=eth1 netcfg/dhcp_timeout=60 netcfg/choose_interface=eth1 priority=critical preseed/url=tftp://192.168.0.2/preseed.cfg quiet
Important points: preseed.cfg, GPT and ESP

One of the most painful points was the fight to find out the correct preseed values to install with GPT, create an ESP (EFI system partition) and use LVM for /. Relevant settings are:
# auto method must be lvm
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-basicfilesystems/no_swap boolean false
# Keep that one set to true so we end up with a UEFI enabled
# system. If set to false, /var/lib/partman/uefi_ignore will be touched
d-i partman-efi/non_efi_system boolean true
# enforce usage of GPT - a must have to use EFI!
d-i partman-basicfilesystems/choose_label string gpt
d-i partman-basicfilesystems/default_label string gpt
d-i partman-partitioning/choose_label string gpt
d-i partman-partitioning/default_label string gpt
d-i partman/choose_label string gpt
d-i partman/default_label string gpt
d-i partman-auto/choose_recipe select boot-root-all
d-i partman-auto/expert_recipe string \
boot-root-all :: \
538 538 1075 free \
$iflabel{ gpt } \
$reusemethod{ } \
method{ efi } \
format{ } \
. \
128 512 256 ext2 \
$defaultignore{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext2 } \
mountpoint{ /boot } \
. \
1024 4096 15360 ext4 \
$lvmok{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ / } \
. \
1024 4096 15360 ext4 \
$lvmok{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ /var } \
. \
1024 1024 -1 ext4 \
$lvmok{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ /var/lib } \
.
# This makes partman automatically partition without confirmation, provided
# that you told it what to do using one of the methods above.
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman-md/confirm boolean true
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
# This is fairly safe to set, it makes grub install automatically to the MBR
# if no other operating system is detected on the machine.
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i grub-installer/bootdev  string /dev/sda
I hope that helps to ease the process of setting up automated UEFI PXE installations for some other people out there still dealing with bare metal systems. Some settings took us some time to figure out, for example d-i partman-efi/non_efi_system boolean true required some searching on codesearch.d.n (an amazing resource if you're writing preseed files and need to find the correct templates) and reading scripts on git.d.o, where you'll find the source for partman-* and grub-installer.

Kudos

Thanks especially to P.P. and M.K. for figuring out all those details.

5 May 2017

Daniel Silverstone: Yarn architecture discussion

Recently Rob and I visited Soile and Lars. We had a lovely time wandering around Helsinki with them, and I also spent a good chunk of time with Lars working on some design and planning for the Yarn test specification and tooling. You see, I wrote a Rust implementation of Yarn called rsyarn "for fun" and in doing so I noted a bunch of missing bits in the understanding Lars and I shared about how Yarn should work. Lars and I filled, and re-filled, a whiteboard with discussion about what the 'Yarn specification' should be, about various language extensions and changes, and also about what functionality a normative implementation of Yarn should have. This article is meant to be a write-up of all of that discussion, but before I start on that, I should probably summarise what Yarn is.
Yarn is a mechanism for specifying tests in a form which is more like documentation than code. Yarn follows the concept of BDD story-based design/testing and has a very Cucumberish scenario language in which to write tests. Yarn takes, as input, Markdown documents which contain code blocks with Yarn tests in them; it then runs those tests and reports on the scenario failures/successes. As an example of a poorly written but still fairly effective Yarn suite, you could look at Gitano's tests or perhaps at Obnam's tests (rendered as HTML). Yarn is not trying to replace unit testing, nor other forms of testing, but rather seeks to be one of a suite of test tools used to help validate software and to verify integrations. Lars writes Yarns which test his server setups, for example. As an example, let's look at what a simple test might be for the behaviour of the /bin/true tool:
SCENARIO true should exit with code zero
WHEN /bin/true is run with no arguments
THEN the exit code is 0
 AND stdout is empty
 AND stderr is empty
Anyone ought to be able to understand exactly what that test is doing, even though there's no obvious code to run. Yarn statements are meant to be easily grokked by both developers and managers. This should be so that managers can understand the tests which verify that requirements are being met, without needing to grok python, shell, C, or whatever else is needed to implement the test where the Yarns meet the metal. Obviously, there needs to be a way to join the dots, and Yarn calls those things IMPLEMENTS, for example:
IMPLEMENTS WHEN (\S+) is run with no arguments
set +e
"${MATCH_1}" > "${DATADIR}/stdout" 2> "${DATADIR}/stderr"
echo $? > "${DATADIR}/exitcode"
As you can see from the example, Yarn IMPLEMENTS can use regular expressions to capture parts of their invocation, allowing the test implementer to handle many different scenario statements with one implementation block. For the rest of the implementation, whatever you assume about things will probably be okay for now.
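For completeness, the THEN and AND statements from the example scenario would be wired up the same way; a sketch of my own (not from a real suite, assuming the usual ${DATADIR} and ${MATCH_n} conventions) might look like:

IMPLEMENTS THEN the exit code is ([0-9]+)
test "$(cat "${DATADIR}/exitcode")" -eq "${MATCH_1}"

IMPLEMENTS THEN (stdout|stderr) is empty
test ! -s "${DATADIR}/${MATCH_1}"

As far as I understand it, AND statements take on the keyword of the statement they follow, so the AND lines in the scenario above are matched against IMPLEMENTS THEN blocks as well.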
Given all of the above, we (Lars and I) decided that it would make a lot of sense if there was a set of Yarn scenarios which could validate a Yarn implementation. Such a document could also form the basis of a Yarn specification and also a manual for writing reasonable Yarn scenarios. As such, we wrote up a three-column approach to what we'd need in that test suite. Firstly we considered what the core features of the Yarn language are: We considered unusual (or corner) cases and which of them needed defining in the short to medium term: All of this comes down to how to interpret input to a Yarn implementation. In addition there were a number of things we felt any "normative" Yarn implementation would have to handle or provide in order to be considered useful. It's worth noting that we don't specify anything about an implementation being a command line tool though... There's bound to be more, but right now with the above, we believe we have two roughly conformant Yarn implementations. Lars' Python based implementation which lives in cmdtest (and which I shall refer to as pyyarn for now) and my Rust based one (rsyarn).
One thing which rsyarn supports, but pyyarn does not, is running multiple scenarios in parallel. However, when I wrote that support into rsyarn I noticed that there were plenty of issues with running stuff in parallel. (A problem I'm sure any of you who know about threads will appreciate.) One particular issue was that scenarios often need to share resources which are not easily sandboxed into the ${DATADIR} provided by Yarn. For example databases or access to limited online services. Lars and I had a good chat about that, and decided that a reasonable language extension could be:
USING database foo
with its counterpart
RESOURCE database (\S+)
LABEL database-$1
GIVEN a database called $1
FINALLY database $1 is torn down
The USING statement should be reasonably clear in its pairing to a RESOURCE statement. The LABEL statement I'll get to in a moment (though it's only relevant in a RESOURCE block), and the rest of the statements are essentially substituted into the calling scenario at the point of the USING. This is nowhere near ready to consider adding to the specification though. Both Lars and I are uncomfortable with the $1 syntax though we can't think of anything nicer right now; and the USING/RESOURCE/LABEL vocabulary isn't set in stone either. The idea of the LABEL is that we'd also require that a normative Yarn implementation be capable of specifying resource limits by name. E.g. if a RESOURCE used a LABEL foo then the caller of a Yarn scenario suite could specify that there were 5 foos available. The Yarn implementation would then schedule a maximum of 5 scenarios which are using that label to happen simultaneously. At bare minimum it'd gate new users, but at best it would intelligently schedule them. In addition, since this introduces the concept of parallelism into Yarn proper, we also wanted to add a maximum parallelism setting to the Yarn implementation requirements; and to specify that any resource label which was not explicitly set had a usage limit of 1.
Once we'd discussed the parallelism, we decided that once we had a nice syntax for expanding these sets of statements anyway, we may as well have a syntax for specifying scenario language expansions which could be used to provide something akin to macros for Yarn scenarios. What we came up with as a starter-for-ten was:
CALLING write foo
paired with
EXPANDING write (\S+)
GIVEN bar
WHEN $1 is written to
THEN success was had by all
Again, the CALLING/EXPANDING keywords are not fixed yet, nor is the $1 type syntax, though whatever is used here should match the other places where we might want it.
Finally we discussed multi-line inputs in Yarn. We currently have a syntax akin to:
GIVEN foo
... bar
... baz
which is directly equivalent to:
GIVEN foo bar baz
and this is achieved by collapsing the multiple lines and using the whitespace normalisation functionality of Yarn to replace all whitespace sequences with single space characters. However this means that, for example, injecting chunks of YAML into a Yarn scenario is a pain, as would be including any amount of another whitespace-sensitive input language. After a lot of to-ing and fro-ing, we decided that the right thing to do would be to redefine the ... Yarn statement to be whitespace preserving and to then pass that whitespace through to be matched by the IMPLEMENTS or whatever. In order for that to work, the regexp matching would have to be defined to treat the input as a single line, allowing . to match \n etc. Of course, this would mean that the old functionality wouldn't be possible, so we considered allowing a \ at the end of a line to provide the current kind of behaviour, rewriting the above example as:
GIVEN foo \
bar \
baz
It's not as nice, but since we couldn't find any real uses of ... in any of our Yarn suites where having the whitespace preserved would be an issue, we decided it was worth the pain.
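To make that concrete, under the proposed whitespace-preserving semantics a scenario embedding a fragment of YAML might look something like this (purely illustrative, since, as noted, none of this is settled yet):

GIVEN the configuration
... server:
...   port: 8080
...   tls: false

The IMPLEMENTS regexp matching that statement would then see the newlines and indentation intact, rather than collapsed into single spaces as today.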
None of the above is, as of yet, set in stone. This blog posting is about me recording the information so that it can be referred to; and also to hopefully spark a little bit of discussion about Yarn. We'd welcome emails to our usual addresses, being poked on Twitter, or on IRC in the common spots we can be found. If you're honestly unsure of how to get hold of us, just comment on this blog post and I'll find your message eventually. Hopefully soon we can start writing that Yarn suite which can be used to validate the behaviour of pyyarn and rsyarn and from there we can implement our new proposals for extending Yarn to be even more useful.

30 April 2017

Russ Allbery: Review: Periodic Tales

Review: Periodic Tales, by Hugh Aldersey-Williams
Publisher: HarperCollins
Copyright: February 2011
ISBN: 0-06-207881-X
Format: Kindle
Pages: 451
Perhaps my favorite chapter in Randall Munroe's What If? is his examination of what would happen if you assembled a periodic table from square blocks of each element. As with most What If? questions, the answer is "everyone in the vicinity dies," but it's all about the journey. The periodic table is full of so many elements that we rarely hear about but which have fascinating properties. It was partly in the memory of that chapter that I bought Periodic Tales on impulse after seeing a mention of it somewhere on the Internet (I now forget where).

Hugh Aldersey-Williams is a journalist and author, but with a background in natural sciences. He also has a life-long hobby of collecting samples of the elements and attempting to complete his own private copy of the periodic table, albeit with considerably more precautions and sample containment than Munroe's thought experiment. Periodic Tales is inspired by that collection. It's a tour and cultural history of many of the elements, discussing their discovery, their role in commerce and industry, their appearance, and often some personal anecdotes. This is not exactly a chemistry book, although there's certainly some chemistry here, nor is it a history, although Aldersey-Williams usually includes some historical notes about each element he discusses. The best term might be an anthropology of the elements: a discussion of how they've influenced culture and an examination of the cultural assumptions and connections we've constructed around them. But primarily it's an idiosyncratic and personal tour of the things Aldersey-Williams found interesting about each one.

Periodic Tales is not comprehensive. The completionist in me found that a bit disappointing, and there are a few elements that I think would have fit the overall thrust of the book but are missing. (Lithium and its connection to mental health and now computer batteries comes to mind.) It's also not organized in the obvious way, either horizontally or vertically along the periodic table. Instead, Aldersey-Williams has divided the elements he talks about into five major but fairly artificial divisions: power (primarily in the economic sense), fire (focused on burning and light), craft (the materials from which we make things), beauty, and earth. Obviously, these are fuzzy; silver appears in craft, but could easily be in power with gold. I'm not sure how defensible this division was. But it does, for good or for ill, break the reader's mind away from a purely chemical and analytical treatment and towards broader cultural associations.

This cultural focus, along with Aldersey-Williams's clear and conversational style, are what pull this book firmly away from being a beautified recitation of facts that could be gleaned from Wikipedia. It also leads to some unexpected choices of focus. For example, the cultural touchstone he chooses for sodium is not salt (which is a broad enough topic for an entire book) but sodium street lights, the ubiquitous and color-distorting light of modern city nights, thus placing salt in the "fire" category of the book. Discussion of cobalt is focused on pigments: the brilliant colors of paint made possible by its many brightly-colored compounds. Arsenic is, of course, a poison, but it's also a source of green, widely used in wallpaper (and Aldersey-Williams discusses the connection with the controversial death of Napoleon).
And the discussion of aluminum starts with a sculpture, and includes a fascinating discussion of "banalization" as we become used to the use of a new metal, which the author continues when looking at titanium and its currently-occurring cultural transition between the simply new and modern and a well-established metal with its own unique cultural associations.

One drawback of the somewhat scattered organization is that, while Periodic Tales provides fascinating glimmers of the history of chemistry and the search to isolate elements, those glimmers are disjointed and presented in no particular order. Recently-discovered metals are discussed alongside ancient ones, and the huge surge in elemental isolation in the 1800s is all jumbled together. Wikipedia has a very useful timeline that helps sort out one's sense of history, but there was a part of me left wanting a more structured presentation.

I read books like this primarily for the fascinating trivia. Mercury: known in ancient times, but nearly useless, so used primarily for ritual and decoration (making the modern reader cringe). Relative abundances of different elements, which often aren't at all what one might think. Rare earths (not actually that rare): isolated through careful, tedious work by Swedish mining chemists whom most people have never heard of, unlike the discoverers of many other elements. And the discovery of the noble gases, which is a fascinating bit of disruptive science made possible by new technology (the spectroscope), forcing a rethinking of the periodic table (which had no column for noble gases). I read a lot of this while on vacation and told interesting tidbits to my parents over breakfast or dinner. It's that sort of book.

This is definitely in the popular science and popular writing category, for all the pluses and minuses that brings. It's not a detailed look at either chemistry or history. But it's very fun to read, it provides a lot of conversational material, and it takes a cultural approach that would not have previously occurred to me. Recommended if you like this sort of thing.

Rating: 7 out of 10

23 April 2017

Mark Brown: Bronica Motor Drive SQ-i

I recently got a Bronica SQ-Ai medium format film camera which came with the Motor Drive SQ-i. Since I couldn't find any documentation at all about it on the internet and had to figure it out for myself, I figured I'd put what I figured out here. Hopefully this will help the next person trying to figure one out, or at least by virtue of being wrong on the internet I'll be able to get someone who knows what they're doing to tell me how the thing really works.

Bottom plate

The motor drive attaches to the camera using the tripod socket; a replacement tripod socket is provided on the base plate. There's also a metal plate with the bottom of the hand grip attached to it, held on to the base plate with a thumb screw. When this is released it gives access to the screw holding in the battery compartment which (very conveniently) takes 6 AA batteries. This also provides power to the camera body when attached. Bottom plate with battery compartment visible. On the back of the base of the camera there's a button with a red LED next to it which illuminates slightly when the button is pressed (it's visible in low light only). I'm not 100% sure what this is for; I'd have guessed a battery check if the light were easier to see.

Top of drive

On the top of the camera there is a hot shoe (with a plastic blanking plate, a nice touch), a mode selector and two buttons. The larger button on the front replicates the shutter release button on the body (which continues to function as well), while the smaller button to the rear of the camera controls the motor: depending on the current state of the camera it cocks the shutter, winds the film and resets the mirror when it is locked up. The mode dial offers three modes: off, S and C. S and C appear to correspond to the S and C modes of the main camera, single and continuous mirror lockup shots. Overall, with this grip fitted and a prism attached, the camera operates very similarly to a 35mm SLR in terms of film winding and so on. It is of course heavier (the whole setup weighs in at 2.5kg) but balanced very well and the grip is very comfortable to use.

11 February 2017

Noah Meyerhans: Using FAI to customize and build your own cloud images

At this past November's Debian cloud sprint, we classified our image users into three broad buckets in order to help guide our discussions and ensure that we were covering the common use cases. Our users fit generally into one of the following groups:
  1. People who directly launch our image and treat it like a classic VPS. These users will most likely be logging into their instances via ssh and configuring them interactively, though they may also install and use a configuration management system at some point.
  2. People who directly launch our images but configure them automatically via launch-time configuration passed to the cloud-init process on the agent. This automatic configuration may optionally serve to bootstrap the instance into a more complete configuration management system. The user may or may not ever actually log in to the system at all.
  3. People who will not use our images directly at all, but will instead construct their own image based on ours. They may do this by launching an instance of our image, customizing it, and snapshotting it, or they may build a custom image from scratch by reusing and modifying the tools and configuration that we use to generate our images.
This post is intended to help people in the final category get started with building their own cloud images based on our tools and configuration. As I mentioned in my previous post on the subject, we are using the FAI project with configuration from the fai-cloud-images repository. It's probably a good idea to get familiar with FAI and our configs before proceeding, but it's not necessary. You'll need to use FAI version 5.3.4 or greater; 5.3.4 is currently available in stretch and jessie-backports. Images can be generated locally on your non-cloud host, or on an existing cloud instance. You'll likely find it more convenient to use a cloud instance so you can avoid the overhead of having to copy disk images between hosts. For the most part, I'll assume throughout this document that you're generating your image on a cloud instance, but I'll highlight the steps where it actually matters. I'll also be describing the steps to target AWS, though the general workflow should be similar if you're targeting a different platform. To get started, install the fai-server package on your instance and clone the fai-cloud-images git repository. (I'll assume the repository is cloned to /srv/fai/config.) In order to generate your own disk image that generally matches what we've been distributing, you'll use a command like:
sudo fai-diskimage --hostname stretch-image --size 8G \
--class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2 \
/tmp/stretch-image.raw
This command will create an 8 GB raw disk image at /tmp/stretch-image.raw, create some partitions and filesystems within it, and install and configure a bunch of packages into it. Exactly what packages it installs and how it configures them will be determined by the FAI config tree and the classes provided on the command line. The package_config subdirectory of the FAI configuration contains several files, the names of which are FAI classes. Activating a given class by referencing it on the fai-diskimage command line instructs FAI to process the contents of the matching package_config file if such a file exists. The files use a simple grammar that provides you with the ability to request certain packages to be installed or removed. Let's say for example that you'd like to build a custom image that looks mostly identical to Debian's images, but that also contains the Apache HTTP server. You might do that by introducing a new file, package_config/HTTPD, as follows:
PACKAGES install
apache2
Then, when running fai-diskimage, you'll add HTTPD to the list of classes:
sudo fai-diskimage --hostname stretch-image --size 8G \
--class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2,HTTPD \
/tmp/stretch-image.raw
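If your custom image also needs configuration files installed, a customization script can do that with FAI's fcopy tool, which is explained in more detail in the next paragraph. A hypothetical script for the HTTPD class might look like this (a sketch only; the script path and config file are illustrative, not taken from the repository):
#!/bin/sh
# scripts/HTTPD/10-vhost (hypothetical): runs whenever the HTTPD class is active.
# fcopy looks under files/<target path>/ for a file named after one of the
# enabled classes and installs the best match at <target path> in the image.
set -e
fcopy -v /etc/apache2/sites-available/example.conf
The corresponding source file would live at files/etc/apache2/sites-available/example.conf/HTTPD in the FAI config tree.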
Aside from custom package installation, you're likely to also want custom configuration. FAI allows the use of pretty much any scripting language to perform modifications to your image. A common task that these scripts may want to perform is the installation of custom configuration files. FAI provides the fcopy tool to help with this. Fcopy is aware of FAI's class list and is able to select an appropriate file from the FAI config's files subdirectory based on classes. The scripts/EC2/10-apt script provides a basic example of using fcopy to select and install an apt sources.list file. The files/etc/apt/sources.list/ subdirectory contains both an EC2 and a GCE file. Since we've enabled the EC2 class on our command line, fcopy will find and install that file. You'll notice that the sources.list subdirectory also contains a preinst file, which fcopy can use to perform additional actions prior to actually installing the specified file. postinst scripts are also supported. Beyond package and file installation, FAI also provides mechanisms to support debconf preseeding, as well as hooks that are executed at various stages of the image generation process. I recommend following the examples in the fai-cloud-images repo, as well as the FAI guide, for more details. I do have one caveat regarding the documentation, however: FAI was originally written to help provision bare-metal systems, and much of its documentation is written with that use case in mind. The cloud image generation process is able to ignore a lot of the complexity of these environments (for example, you don't need to worry about PXE boot and TFTP!) However, this means that although you get to ignore probably half of the FAI Guide, it's not immediately obvious which half you get to ignore. Once you've generated your raw image, you can inspect it by telling Linux about the partitions contained within, and then mount and examine the filesystems. For example:
admin@ip-10-0-0-64:~$ sudo partx --show /tmp/stretch-image.raw
NR START      END  SECTORS SIZE NAME UUID
 1  2048 16777215 16775168   8G      ed093314-01
admin@ip-10-0-0-64:~$ sudo partx -a /tmp/stretch-image.raw 
partx: /dev/loop0: error adding partition 1
admin@ip-10-0-0-64:~$ lsblk 
NAME      MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
xvda      202:0    0      8G  0 disk 
├─xvda1   202:1    0 1007.5K  0 part
└─xvda2   202:2    0      8G  0 part /
loop0       7:0    0      8G  0 loop
└─loop0p1 259:0    0      8G  0 loop
admin@ip-10-0-0-64:~$ sudo mount /dev/loop0p1 /mnt/
admin@ip-10-0-0-64:~$ ls /mnt/
bin/   dev/  home/        initrd.img.old@  lib64/       media/  opt/   root/  sbin/  sys/  usr/  vmlinuz@
boot/  etc/  initrd.img@  lib/             lost+found/  mnt/    proc/  run/   srv/   tmp/  var/  vmlinuz.old@
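When you're finished inspecting the image, unmount it and detach the loop device again before copying the image anywhere; roughly (a small sketch, not from the original post):
admin@ip-10-0-0-64:~$ sudo umount /mnt
admin@ip-10-0-0-64:~$ sudo losetup -d /dev/loop0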
In order to actually use your image with your cloud provider, you'll need to register it with them. Strictly speaking, these are the only steps that are provider-specific and need to be run on your provider's cloud infrastructure. AWS documents this process in the User Guide for Linux Instances. The basic workflow is:
  1. Attach a secondary EBS volume to your EC2 instance. It must be large enough to hold the raw disk image you created.
  2. Use dd to write your image to the secondary volume, e.g. sudo dd if=/tmp/stretch-image.raw of=/dev/xvdb
  3. Use the volume-to-ami.sh script in the fai-cloud-images repo to snapshot the volume and register the resulting snapshot with AWS as a new AMI. Example: ./volume-to-ami.sh vol-04351c30c46d7dd6e
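For reference, the snapshot and registration steps that volume-to-ami.sh wraps correspond roughly to the following AWS CLI calls (a hedged sketch; the image name and snapshot ID are placeholders, and the script handles waiting and additional options for you):
aws ec2 create-snapshot --volume-id vol-04351c30c46d7dd6e --description "stretch image"
# wait for the snapshot to complete, then register it as an AMI
aws ec2 register-image --name my-stretch-image --architecture x86_64 \
    --virtualization-type hvm --root-device-name /dev/xvda \
    --block-device-mappings 'DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0}'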
The volume-to-ami.sh script must be run with access to AWS credentials that grant access to several EC2 API calls: describe-snapshots, create-snapshot, and register-image. It recognizes a --help command-line flag and several options that modify characteristics of the AMI that it registers. When volume-to-ami.sh completes, it will print the AMI ID of your new image. You can now work with this image using standard AWS workflows. As always, we welcome feedback and contributions via the debian-cloud mailing list or #debian-cloud on IRC.

1 February 2017

Antoine Beaupr : Testing new hardware with Stressant

I got a new computer and wondered... How can I test it? One of those innocent questions that brings hours and hours of work and questioning...

A new desktop: Intel NUC devices After reading up on Jeff Atwood's blog and especially his article on the scooter computer, I have discovered a whole range of small computers that could answer my need for a faster machine in my office at a low price tag and without taking up too much of my precious desk space. After what now seems like a too short review I ended up buying a new Intel NUC device from NCIX.com, along with 16GB of RAM and an amazing 500GB M.2 hard drive for around 750$. I am very happy with the machine. It's very quiet and takes up zero space on my desk as I was able to screw it to the back of my screen. You can see my review of the hardware compatibility and installation report in the Debian wiki. I wish I had taken more time to review the possible alternatives - for example I found out about the amazing Airtop PC recently and, although that specific brand is a bit too expensive, the space of small computers is far and wide and deserves a more thorough review than just finding the NUC by accident while shopping for laptops on System76.com...

Reviving the Stressant project But this, and Atwood's Is Your Computer Stable? article, got me thinking about how to test new computers. It's one thing to build a machine and fire it up, but how do you know everything is actually really working? It is common practice in the industry to do a basic stress test or burn-in when you get a new machine - how do you proceed with such tests? Back in the days when I was working at Koumbit, I wrote a tool exactly for that purpose called Stressant. Since I am the main author of the project and I didn't see much activity on it since I left, I felt it would be a good idea to bring it under my personal wing again, and I have therefore moved it to my Gitlab where I hope to bring it back to life. Parts of the project's rationale are explained in an "Intent To Package" for the "breakin" tool (Debian bug #707178), which, after closer examination, ended up turning into a complete rewrite. The homepage has a bit more information about how the tool works and its objectives, but generally, the idea is to have a live CD or USB stick that you can just plug into a machine to run a battery of automated tests (memtest86, bonnie++, stress-ng and disk wiping, for example) or allow for interactive rescue missions on broken machines. At Koumbit, we had Debirf-based live images that we could boot off the network fairly easily and that we would use for various purposes, although nothing was automated yet. The tool is based on Debian, but since it starts from boot, it should be runnable on any computer. I was able to bring the project back to life, to a certain extent, by switching to vmdebootstrap instead of debirf for builds, but that removed netboot support. Also, I hope that Gitlab could provide an autobuilder for the images, but unfortunately there's a bug in Docker that makes it impossible to mount loop images inside Docker containers (which makes it impossible to build Docker in Docker, apparently).
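For reference, a rough manual equivalent of such a burn-in using the tools mentioned above might look like this (a sketch only, not Stressant itself; the device name is an example, and memtest86 is left out since it runs at boot time rather than from a shell):
stress-ng --cpu 0 --vm 2 --vm-bytes 80% --timeout 1h --metrics-brief  # load all CPUs and memory
smartctl -t long /dev/sda                                             # schedule a long SMART self-test
bonnie++ -d /tmp -u nobody                                            # basic disk throughput test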

Should I start yet another project? So there's still a lot of work to do in this project to get it off the ground. I am still a bit hesitant about getting into this, however, for a few reasons:
  1. It's yet another volunteer job - which I am trying to reduce for health and obvious economic reasons. That's a purely personal reason and there isn't much you can do about it.
  2. I am not sure the project is useful. It's one thing to build a tool that can do basic tests on a machine - I can probably just build a live image for myself that will do everything I need - it's another completely different thing to build something that will scale to multiple machines and be useful for a wider variety of use cases and users.
(A variation of #1 is how everything and everyone is moving to the cloud. It's become a common argument that you shouldn't run your own metal these days, and we seem to be fighting an uphill economic battle when we run our own datacenters, racks or even physical servers these days. I still think it's essential to have some connection to metal to be autonomous in our communications, but I'm worried that focusing on such a project is another of my precious dead enterprises...) Part #2 is obviously where you people come in. Here are a few questions I'd like to have feedback on:
  1. (How) do you perform stress-testing of your machines before putting them in production (or when you find issues you suspect to be hardware-related)?
  2. Would a tool like breakin or stressant be useful in your environment?
  3. Which tools do you use now for such purposes?
  4. Would you contribute to such a project? How?
  5. Do you think there is room for such a project in the existing ecology of projects, or should I contribute to an existing project?
Any feedback here would be, of course, greatly appreciated.

13 December 2016

Shirish Agarwal: Eagle Encounters, pier Stellenbosch

Before starting, I have to say that hindsight, as they say, is always 20/20. I was moaning about my 6/7 hour trip a few blog posts back, but now I have come to know about the 17.5 hr. flights (17.5x800km/hr=14000 km.) which are happening around me. I would say I was whining about nothing, seeing those flights. I can't even imagine how people would feel on those flights. Six hours was too much in the tin can; thankfully, though, I was in the aisle seat. In 14 hours most people would probably give in to air rage. I just saw an excellent article on the subject. I also came to know that seat selection and food on long-haul flights are a luxury, hence that changes the equation quite a bit as well. So on these facts, it seems Qatar Airways treated me quite well, as I was able to use both those options. Disclaimer: My knowledge about birds/avians is almost non-existent, hence feel free to correct me if I do go wrong anywhere. Coming back to earth, literally, I will have to share a bit of South Africa, as that is part and parcel of what I'm going to share next. Also, many of the pictures shared in this particular blog post belong to KK, who has shared them with me with permission to share them with the rest of the world. When I was in South Africa, in the first couple of days, as well as from what little South African history I had read before travelling, I had known that the Europeans, specifically the Dutch, ruled South Africa for many years. What was shared with me in the first day or two was that Afrikaans is mostly spoken by Europeans still living in South Africa, and some is spoken by the coloured people as well. This tied in with the literature I had already read. The Wikipedia page shares which language is spoken by whom and how the demographics play out, if people are interested to know that. One of the words, or part of a word, for places that we came to know is "bosch", as it is used in many places. Bosch means wood or forest. After this we came to know about many places known as somethingbosch, which signified to us that the area is or was a forest. On the second/third day Chirayu (pictured, extreme left) shared the idea of going to Eagle Encounters. The other people in the picture are yours truly and some of the people from GSoC; KK is in the middle, and the driver, Leonard something, who took us to Eagle Encounters, is on the extreme right. Update: I was informed that it was a joint plan between Chirayu and KK. They also had some other options planned which later got dropped by the wayside. The whole gang/group along with Leonard coming from Eagle Encounters It was supposed to be somewhat near (Spier, Stellenbosch). While I was not able to see/figure out where Eagle Encounters is on OpenStreetMap, somebody named Firefishy added Spier to OSM a few years back. So thank you for that, Firefishy, so I can at least pin-point a closer place. I didn't see/know/try to figure out more about the place, as Chirayu said it's "a zoo". I wasn't enthusiastic, as I had been depressed by most zoos in India, while you do have national reserves/parks in India where you see animals in their full glory. I have been lucky to have been able to see the Tadoba and Ranthambore National Parks and to spend some quality time (about a week) there to have some idea as to what happens in forests and for the people living in the buffer zones, but those stories are for a different day altogether.
I have to say I do hope to be part of the Ranthambore experience again somewhere in the future; it really is a beautiful place for flora and fauna, and fortunately or unfortunately this is the best time apart from spring, as you have the game of mist/fog and animals. North India this time of the year is something to be experienced. I wasn't much enthused, as zoos in India are claustrophobic for animals and people both. There are small cages and you see and smell the shit/piss of the animals, generally not a good feeling. Chirayu also shared with us the possibility of being able to ride Segways and a range of bicycles, which relieved me, so that in case we didn't enjoy the zoo we would at least enjoy the Segways and have a good time (although that would have different expenses than the ones at Eagle Encounters). My whole education about what a zoo could be was turned around at Eagle Encounters, as it seems to be somewhere between a zoo and what I know as national parks where animals roam free. We purchased the tickets and went in; the first event/happening was Eagle Encounters itself. One of the families at Eagle Encounters handling a snowy owl Our introduction to the place was given by two beautiful volunteers/trainers who were in charge of all the birds in the Eagle Encounters vicinity. The introduction started with every one of us who came for the Eagle Encounters show wearing a glove and having one of a pair of snowy owls sit on it. That picture is of a family who was part of our show. Before my turn came, I was a little apprehensive/worried about holding an owl, period. To my surprise, they were so soft and easy-going that I could hardly feel the weight on my hand. The trainers/volunteers were constantly feeding them earthworm bits (I didn't ask, just guessing) and we were all happy, as they, along with the visitors, were constantly playing and interacting with the birds, sharing with us the life-cycle of the snowy owl. It's only then that I understood why, in the Harry Potter universe, the owl plays such an important part. They seem to be nice, curious, easy-going, proud creatures, which fits perfectly in the HP universe. In hindsight I should have videoed the whole experience, as the trainers/volunteers showed a battery of owls, eagles, vultures, hawks (different birds of prey), what have you. I have to confess my knowledge of birds is and was non-existent. Vulture at the Eagle Encounters show A vulture, one of the larger birds we saw at the Eagle Encounters show. Some of the birds could be dangerous, especially in the wild. The other trainer showing off a black eagle at Eagle Encounters That was the other volunteer/trainer who was showing off the birds. I especially liked the t-shirt she was wearing. The shop at Eagle Encounters had a whole lot of them; they were a bit expensive and just not my size. Tidbit: Just a few years ago, it was a shocker to me to know/realize that what commonly goes/is known in the country as a parrot by most people is actually a parakeet. As can be seen in the article linked, they are widely distributed in India. While I was young, I used to see the rose-ringed parakeets quite a bit around, but nowadays, probably due to pollution and other factors, they are noticeably fewer. They are popular as pets in India. I don't know what Pollito would think about that; I don't think he would think well of it. Trainer showing off a hawk at Eagle Encounters As I cannot differentiate between a hawk, vulture, eagle, etc., I would safely say it was a bird of prey that he was holding.
This photo was taken after the event was over, when we all were curious to know about the volunteers/trainers, their day jobs and what it meant for them to be taking care of these birds. Update: KK has shared with me what those specific birds are called, so in case the names or species are wrong, please take that up with her and not me. While I don't remember the name of the trainer/volunteer, among other things it was shared that the volunteers/trainers aren't paid enough and they never have enough funds to take care of all the birds who come to them. Trainer showing a hawk and background chart Where the picture was shot (both this and the earlier one) was a sort of open office. If you look closely, you will see that there are names of the birds; for instance, people who loved LOTR would easily spot "Gandalf". That board lists how much food (probably in grams) the bird ate in a day and week. While it was not shared, I'm sure there would be a lot of paperwork and studies to get the birds as well as possible. From a computer science perspective, there seemed to be a lot of potential for avian and big-data professionals to do a lot of computer modelling and analysis and give more insight into the rehabilitation efforts, so the process could be more fine-tuned, efficient and economical perhaps. Hawk on stand This is how we saw the majority of the birds. Most of them had a metal/plastic string which was tied to small artificial branches like the one above. I forgot to share a very important point. Eagle Encounters is not a zoo but a rehabilitation centre. While the cynic/skeptic part of me tried not to feel or see the before and after pictures of the birds brought to the rehabilitation centre, the caring part was moved to see most of the birds being treated with love and affection. From our conversations with the volunteer/trainer it emerged that every week they had to turn away lots of birds due to space constraints. It is only the most serious/life-threatening cases, for which they could provide care in a sustainable way, that they would keep. Some of the cages the birds were in were large and airy. I wouldn't say clean, as what little I read before as well as later is that birds shit enormously, so cleaning cages is quite an effort. On most of the cages and near those artificial branches there were placards of people who were sponsoring a bird or two to look after them. From what was shared, many of the birds who came had been abused in many ways. Some of them had their bones crushed and/or suffered other cruelties. As I had shared, I had been wonderfully surprised to see birds come so close to me and most of my friends, and I felt rage towards those who had treated the birds in such evil, bad ways. What was shared with us is that while they try to heal the birds as much as possible, it is always suspect how well the birds would survive on their own in nature, hence many of these birds would go to the sponsor or to some other place when they are well. The Secretary birds - cage - sponsors - adopted If you look at the picture closely, or maybe look at the higher resolution photo in the gallery, you will see that both the birds have been adopted by two different couples. The birds, as the name tag shows, are called "Secretaries". The Secretaries make a typical sound which is similar to the sound made by old typewriters. Just as woodpeckers make Morse-code-like noises when they are pecking with their beaks on trees, the Secretaries make something similar to the sound of keys on old Remington typewriters being clicked.
One of the birds in the cage This is one of the birds in one of the few cages. You can also see it in a higher-resolution version of the earlier picture, the one which has the Secretaries. Also, as can be seen in the picture, there is wood-working happening and they are trying to expand the rehabilitation centre. All in all, an excursion which was supposed to be for just an hour extended to something like 3-odd hours. KK shot more than a thousand-odd pictures while trying to teach/converse in Malayalam with some of the birds. She shot well over 1000 photos, which would have filled something like 30-odd traditional photo albums. Jaminy (KK's partner-in-crime) used her selfie stick to desired effect, taking pictures with most of the birds as one does with celebrities. I had also taken some, but most of them were over-exposed, as I was new to mobile photography at that time; I still am, but mostly it works. Lake with barn owls near Eagle Encounters That is the lake we discovered/saw after coming back from Eagle Encounters. We had good times. Lastly, a virtual prize distribution ceremony: a. Chirayu and KK: a platinum trophy for actually thinking of and pitching the place in the first place. b. Shirish and Deven Bansod: metal cups for not taking more than 10 minutes to freshen up and be back after hearing the plan to go to Eagle Encounters. c. All the girls/women: spoons for actually making it to the day. All the girls took quite some time to freshen up, otherwise it might have been possible to also experience the Segways, who knows. All in all, an enjoyable day spent being part of Eagle Encounters.
Filed under: Miscellenous Tagged: #Birds of Prey, #Debconf16, #Eagle Encounters, #Rehabilitation, #South African History, #Stellenbosch

15 November 2016

Antoine Beaupr : The Turris Omnia router: help for the IoT mess?

The Turris Omnia router is not the first FLOSS router out there, but it could well be one of the first open hardware routers to be available. As the crowdfunding campaign is coming to a close, it is worth reflecting on the place of the project in the ecosystem. Beyond that, I got my hardware recently, so I was able to give it a try.

A short introduction to the Omnia project The Turris Omnia Router The Omnia router is a followup project on CZ.NIC's original research project, the Turris. The goal of the project was to identify hostile traffic on end-user networks and develop global responses to those attacks across every monitored device. The Omnia is an extension of the original project: more features were added and data collection is now opt-in. Whereas the original Turris was simply a home router, the new Omnia router includes:
  • 1.6GHz ARM CPU
  • 1-2GB RAM
  • 8GB flash storage
  • 6 Gbit Ethernet ports
  • SFP fiber port
  • 2 Mini-PCI express ports
  • mSATA port
  • 3 MIMO 802.11ac and 2 MIMO 802.11bgn radios and antennas
  • SIM card support for backup connectivity
Some models sold had a larger case to accommodate extra hard drives, turning the Omnia router into a NAS device that could actually serve as a multi-purpose home server. Indeed, it is one of the objectives of the project to make "more than just a router". The NAS model is not currently on sale anymore, but there are plans to bring it back along with LTE modem options and new accessories "to expand Omnia towards home automation". Omnia runs a fork of the OpenWRT distribution called TurrisOS that has been customized to support automated live updates, a simpler web interface, and other extra features. The fork also has patches to the Linux kernel, which is based on Linux 4.4.13 (according to uname -a). It is unclear why those patches are necessary since the ARMv7 Armada 385 CPU has been supported in Linux since at least 4.2-rc1, but it is common for OpenWRT ports to ship patches to the kernel, either to backport missing functionality or perform some optimization. There has been some pressure from backers to petition Turris to "speedup the process of upstreaming Omnia support to OpenWrt". It could be that the team is too busy with delivering the devices already ordered to complete that process at this point. The software is available on the CZ-NIC GitHub repository and the actual Linux patches can be found here and here. CZ.NIC also operates a private GitLab instance where more software is available. There is technically no reason why you wouldn't be able to run your own distribution on the Omnia router: OpenWRT development snapshots should be able to run on the Omnia hardware and some people have installed Debian on Omnia. It may require some customization (e.g. the kernel) to make sure the Omnia hardware is correctly supported. Most people seem to prefer to run TurrisOS because of the extra features. The hardware itself is also free and open for the most part. There is a binary blob needed for the 5GHz wireless card, which seems to be the only proprietary component on the board. The schematics of the device are available through the Omnia wiki, but oddly not in the GitHub repository like the rest of the software.

Hands on I received my own router last week, which is about six months late from the original April 2016 delivery date; it allowed me to do some hands-on testing of the device. The first thing I noticed was a known problem with the antenna connectors: I had to open up the case to screw the fittings tight, otherwise the antennas wouldn't screw in correctly. Once that was done, I simply had to go through the usual process of setting up the router, which consisted of connecting the Omnia to my laptop with an Ethernet cable, connecting the Omnia to an uplink (I hooked it into my existing network), and going through a web wizard. I was pleasantly surprised with the interface: it was smooth and easy to use, but at the same time imposed good security practices on the user. Install wizard performing automatic updates For example, the wizard, once connected to the network, goes through a full system upgrade and will, by default, automatically upgrade itself (including reboots) when new updates become available. Users have to opt in to the automatic updates, and can choose to automate only the downloading and installation of the updates without having the device reboot on its own. Reboots are also performed during user-specified time frames (by default, Omnia applies kernel updates during the night). I also liked the "skip" button that allowed me to completely bypass the wizard and configure the device myself, through the regular OpenWRT systems (like LuCI or SSH) if I needed to. The Omnia router about to roll back to the latest snapshot Notwithstanding the antenna connectors themselves, the hardware is nice. I ordered the black metal case, and I must admit I love the many LED lights in the front. It is especially useful to have color changes in the reset procedure: no more guessing what state the device is in or if I pressed the reset button long enough. The LEDs can also be dimmed to reduce the glare that our electronic devices produce. All this comes at a price, however: at $250 USD, it is a much higher price tag than common home routers, which typically go for around $50. Furthermore, it may be difficult to actually get the device, because no orders are being accepted on the Indiegogo site after October 31. The Turris team doesn't actually want to deal with retail sales and has now delegated retail sales to other stores, which are currently limited to European deliveries.

A nice device to help fight off the IoT apocalypse It seems there isn't a week that goes by these days without a record-breaking distributed denial-of-service (DDoS) attack. Those attacks are more and more caused by home routers, webcams, and "Internet of Things" (IoT) devices. In that context, the Omnia sets a high bar for how devices should be built but also how they should be operated. Omnia routers are automatically upgraded on a nightly basis and, by default, do not provide telnet or SSH ports to run arbitrary code. There is the password-less wizard that starts up on install, but it forces the user to choose a password in order to complete the configuration. Both the hardware and software of the Omnia are free and open. The automatic update's EULA explicitly states that the software provided by CZ.NIC "will be released under a free software licence" (and it has been, as mentioned earlier). This makes the machine much easier to audit by someone looking for possible flaws, say for example a customs official looking to approve the import in the eventual case where IoT devices end up being regulated. But it also makes the device itself more secure. One of the problems with these kinds of devices is "bit rot": they have known vulnerabilities that are not fixed in a timely manner, if at all. While it would be trivial for an attacker to disable the Omnia's auto-update mechanisms, the point is not to counterattack, but to prevent attacks on known vulnerabilities. The CZ.NIC folks take it a step further and encourage users to actively participate in a monitoring effort to document such attacks. For example, the Omnia can run a honeypot to lure attackers into divulging their presence. The Omnia also runs an elaborate data collection program, where routers report malicious activity to a central server that collects information about traffic flows, blocked packets, bandwidth usage, and activity from a predefined list of malicious addresses. The exact data collected is specified in another EULA that is currently only available to users logged in at the Turris web site. That data can then be turned into tweaked firewall rules to protect the overall network, which the Turris project calls a distributed adaptive firewall. Users need to explicitly opt in to the monitoring system by registering on a portal using their email address. Turris devices also feature the Majordomo software (not to be confused with the venerable mailing list software) that can also monitor devices in your home and identify hostile traffic, potentially leading users to take responsibility for the actions of their own devices. This, in turn, could lead users to trickle complaints back up to the manufacturers that could change their behavior. It turns out that some companies do care about their reputations and will issue recalls if their devices have significant enough issues. It remains to be seen how effective the latter approach will be, however. In the meantime, the Omnia seems to be an excellent all-around server and router for even the most demanding home or small-office environments and is a great example for future competitors.
Note: this article first appeared in the Linux Weekly News.

31 July 2016

Enrico Zini: Links for August 2016

First post with the new link collection feature of staticsite!
Heavy Metal and Natural Language Processing [archived]
Natural language processing and Metal lyrics, including the formula for the "metalness" of a word and a list of the most and least metal words.
Confirming all use of an SSH agent [archived]
For a long time I've wanted an ssh-agent setup that would ask me before every use, so I could slightly more comfortably forward authentication over SSH without worrying that my session might get hijacked somewhere at the remote end (I often find myself wanting to pull authenticated git repos on remote hosts). I'm at DebConf this week, which is an ideal time to dig further into these things, so I did so today. As is often the case it turns out this is already possible, if you know how.
Why We Don t Report It [archived]
"Why don't you report it?" It's up there on every list I've seen of things you shouldn't say to sexual assault survivors, yet I keep hearing it
Voltron, an extensible debugger UI toolkit written in Python
Multi-panel display built from various gdb outputs.
Notmuch, offlineimap and Sieve setup [archived]
Nice description of a notmuch+offlineimap+sieve setup, for when I feel like rethinking my email setup.
Wikipedia:Unusual articles
An endless source of weird and wonderful.
ZERO: no linked HIV transmissions [archived]
The results provide a dataset to question whether transmission with an undetectable viral load is actually possible. They should help normalise HIV and challenge stigma and discrimination.
TV pickup
Someone in the UK once told me that it was a big enough problem that so many people turn on their electric kettles during the end titles of Eastenders that there's an employee in a hydro plant who needs to watch it to ramp up the power at the right time. I've finally found a Wikipedia page about it.
Amazon isn't saying if Echo has been wiretapped [archived]
"We may never know if the feds have hijacked Amazon Echo. In case you didn't know, Echo is an always-on device, which, when activated, can return search queries, as well as read audiobooks and report sports, traffic, and weather. It can even control smart home devices."

21 June 2016

Ian Wienand: Zuul and Ansible in OpenStack CI

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said
(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).
Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.
OpenStack CI Overview While the previous post was really focused on the image-building components of the OpenStack CI system, the overview here is the same but more focused on the launchers that run the tests. Overview of OpenStack CI with Zuul and Ansible
  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job: a job definition (what to actually do) and a worker node (somewhere to do it). The first part, what to do, is provided by job definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman. The second part, somewhere to run the test, takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand. Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code, this is driven by the NodeWorker class; you can see it being created in response to an assignment via LaunchServer.assignNode. To actually run the job, where the "job hits the metal" as it were, the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs; after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily. When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again; nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).

Work will continue within OpenStack Infrastructure to further enhance Zuul, including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

6 June 2016

C.J. Adams-Collier: Some work on a VyOS image with Let's Encrypt certs

I put some packages together this weekend. It's been a while since I've debuilt anything officially. The plan is to build a binding to the libgnutls.so.30 API. The certtool CSR (REQ) generation interface does not allow me to create a CRL with non-critical attributes set on purposes. Maybe if I do it a bit closer to the metal it will be easier.
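For context, CSR generation with GnuTLS certtool normally goes through a template file, roughly like this (a hedged sketch; the template contents are illustrative):
# cert.cfg, a certtool template (illustrative):
#   cn = "router.example.org"
#   tls_www_server
certtool --generate-privkey --outfile key.pem
certtool --generate-request --load-privkey key.pem --template cert.cfg --outfile request.csr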

16 April 2016

John Goerzen: A Year of Flight

"Dad-o, I'm so glad you're a pilot!" My 9-year-old son Jacob has been saying that, always with a big hug and his fond nickname for me ("dad-o"). It has now been a year since the first time I sat in the pilot's seat of a plane, taking my first step towards exploring the world from the sky. And now, one year after I first sat in the pilot's seat of an airborne plane, it's prompted me to think back to my own memories.

Flying over the airport at Moundridge, KS Memories Back when I was a child, maybe about the age my children are now, I'd be outside in the evening and see this orange plane flying overhead. Our neighbor Don had a small ultralight plane and a grass landing strip next to his house. I remember longing to be up in the sky with Don, exploring the world from up there. At that age, I didn't know all the details of why that wouldn't work; I just knew I wanted to ride in it. It wasn't until I was about 11 that I flew for the first time. I still remember that TWA flight with my grandma, taking off early in the morning and flying just a little ways above the puffy clouds lit up all yellow and orange by the sunrise. Even 25 years later, that memory still holds as one of the most beautiful scenes I have ever seen. Exploring I have always been an explorer. When I go past something interesting, I love to go see what it looks like inside. I enjoy driving around Kansas with Laura, finding hidden waterfalls, old county courthouses, ghost towns, beautiful old churches, even small-town restaurants. I explore things around me, too: once taking apart a lawnmower engine as a child, nowadays building HF antennas in my treetops or writing code for Linux. If there is little to learn about something, it becomes less interesting to me. I see this starting to build in my children, too. Since before they could walk, if we were waiting for something in a large building, we'd go exploring.

A patch of rain over Hillsboro, KS The New World A pilot once told me, "Nobody can become a pilot without it changing the way they see the world and then, changing their life." I doubted that. But it was true. One of the most poetic sights I know is flying a couple thousand feet above an interstate highway at night, following it to my destination. All those red and white lights, those metal capsules of thousands of lives and thousands of stories, stretching out as far as the eye can see in either direction.

Kansas sunset from the plane When you're in a plane, that small town nowhere near a freeway that always seemed so far away is suddenly only a 15-minute flight away, not even enough time to climb up to a high cruise altitude. Two minutes after takeoff, any number of cities that are an hour's drive away are visible simultaneously, their unique features already recognizable: a grain elevator, oil refinery, college campus, lake, whatever. And all the houses you fly over, each with people in them. Some pretty similar to you, some apparently not. But pretty soon you realize that we all are humans, and we aren't all that different. You can't tell a liberal from a conservative from the sky, nor a person's race or religion, nor even see the border between states. Towns and cities are often nameless from the sky, unless you're really low; only your navigation will tell you where you are. I've had the privilege to fly to small out-of-the-way airports, the kind that have a car that pilots can use for free to go into town and get lunch, and leave the key out for them. There I've met many friendly people. I've also landed my little Cessna at a big commercial airport where I probably used only 1/10th of the runway, and on a grass runway that was barely maintained at all. I've flown to towns I'd driven to or through many times, discovering the friendly folks at the small airport out of town. I've flown to parts of Kansas I've never been to before, discovered charming old downtowns and rolling hills, little bursts of rain and beautiful sunsets that seem to turn into a sea.

Parked at the Smith Center, KS airport terminal, about to meet some wonderful people For a guy that loves exploring the nooks and crannies of the world that everyone else drives by on their way to a major destination, being a pilot has meant many soul-filling moments. Hard Work I knew becoming a pilot would be a lot of hard work, and thankfully I remembered stories like that when I finally concluded it would be worth it. I found that I had an aptitude for a lot of things that many find difficult about being a pilot: my experience with amateur radio made me a natural at talking to ATC, my fascination with maps and navigation meant I already knew how to read aviation sectional maps before I even started my training and knew how to process that information in the cockpit, and my years as a system administrator and programmer trained me in a careful and methodical decision-making process. And, much to the surprise of my flight instructor, I couldn't wait to begin the part of training about navigating using VORs (VHF radio beacons). I guess he, like many student pilots, had struggled with that, but I was fascinated by this pre-GPS technology (which I still routinely use in my flight planning, as a backup in case the GPS constellation or a GPS receiver fails). So that left the reflexes of flight, the art of it, as the parts I had to work on the hardest. The exam with the FAA is not like getting your driver's license. It's a multi-stage and difficult process. So when the FAA Designated Pilot Examiner said "congratulations, pilot!" and later told my flight instructor "you did a really good job with this one", I felt a true sense of accomplishment.

Some of my prep materials Worth It Passengers in a small plane can usually hear all the radio conversations going on. My family has heard me talking to air traffic control, to small and big planes. My 6-year-old son Oliver was playing yesterday, and I saw him pick up a plane and say this: "Two-four-niner-golf requesting to land on runway one-seven." "Two-four-niner-golf back-taxi on one-seven." "Two-four-niner-golf ready to takeoff on runway one-seven!" That was a surprisingly accurate representation of some communication a pilot might have (right down to the made-up tailnumber with the spelling alphabet!)

It just got more involved from there! Jacob and Oliver love model train shows. I couldn't take them to one near us, but there was one in Joplin, MO. So the day before Easter, while Laura was working on her Easter sermon, two excited boys and I (frankly also excited) climbed into a plane and flew to Joplin. We had a great time at the train show, discovered a restaurant specializing in various kinds of hot dogs (of course they both wanted to eat there), played in a park, explored the city, and they enjoyed the free cookies at the general aviation terminal building while I traded tips on fun places to fly with other pilots. When it comes right down to it, the smiles of the people I fly with are the most beautiful thing in the air.

Jacob after his first father-son flight with me

4 April 2016

Matthew Garrett: TPMs, event logs, fine-grained measurements and avoiding fragility in remote-attestation

Trusted Platform Modules are fairly unintelligent devices. They can do some crypto, but they don't have any ability to directly monitor the state of the system they're attached to. This is worked around by having each stage of the boot process "measure" state into registers (Platform Configuration Registers, or PCRs) in the TPM by taking the SHA1 of the next boot component and performing an extend operation. Extend works like this:

New PCR value = SHA1(current value || new hash)

ie, the TPM takes the current contents of the PCR (a 20-byte register), concatenates the new SHA1 to the end of that in order to obtain a 40-byte value, takes the SHA1 of this 40-byte value to obtain a 20-byte hash and sets the PCR value to this. This has a couple of interesting properties: there's no way to set a PCR to an arbitrary value, and the final PCR value depends on every measurement and on the order in which they were extended, so a given value can only be reproduced by replaying the same sequence of operations. But how do we know what those operations were? We control the bootloader and the kernel and we know what extend operations they performed, so that much is easy. But the firmware itself will have performed some number of operations (the firmware itself is measured, as is the firmware configuration, and certain aspects of the boot process that aren't in our control may also be measured) and we may not be able to reconstruct those from scratch.

Thankfully we have more than just the final PCR data. The firmware provides an interface to log each extend operation, and you can read the event log in /sys/kernel/security/tpm0/binary_bios_measurements. You can pull information out of that log and use it to reconstruct the writes the firmware made. Merge those with the writes you performed and you should be able to reconstruct the final TPM state. Hurrah!
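The log is a binary structure; for a quick raw look at it (a trivial example, assuming a TPM 1.2-era SHA1 event log):

hexdump -C /sys/kernel/security/tpm0/binary_bios_measurements | head -n 20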

The problem is that a lot of what you want to measure into the TPM may vary between machines or change in response to configuration changes or system updates. If you measure every module that grub loads, and if grub changes the order that it loads modules in, you also need to update your calculations of the end result. Thankfully there's a way around this - rather than making policy decisions based on the final TPM value, just use the final TPM value to ensure that the log is valid. If you extract each hash value from the log and simulate an extend operation, you should end up with the same value as is present in the TPM. If so, you know that the log is valid. At that point you can examine individual log entries without having to care about the order that they occurred in, which makes writing your policy significantly easier.
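A minimal sketch of that replay check, assuming the per-event SHA1 digests have already been extracted from the log (parsing the binary log format is left out here):

#!/bin/sh
# replay-pcr.sh (hypothetical): read one hex SHA1 digest per line on stdin,
# simulate the TPM extend operation over them, and print the final PCR value.
pcr=0000000000000000000000000000000000000000   # PCRs start out as 20 zero bytes
while read -r digest; do
    # extend: PCR = SHA1(PCR || digest), computed over the raw bytes
    pcr=$(printf '%s%s' "$pcr" "$digest" | xxd -r -p | sha1sum | awk '{print $1}')
done
echo "$pcr"

If the printed value matches what the TPM reports for that PCR, the log can be trusted and its individual entries can then be examined.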

But there's another source of fragility. Imagine that you're measuring every command executed by grub (as is the case in the CoreOS grub). You want to ensure that no inappropriate commands have been run (such as ones that would allow you to modify the loaded kernel after it's been measured), but you also want to permit certain variations - for instance, you might have a primary root filesystem and a fallback root filesystem, and you're ok with either being passed as a kernel argument. One approach would be to write two lines of policy, but there's an even more flexible approach. If the bootloader logs the entire command into the event log, when replaying the log we can verify that the event description hashes to the value that was passed to the TPM. If it does, rather than testing against an explicit hash value, we can examine the string itself. If the event description matches a regular expression provided by the policy then we're good.

This approach makes it possible to write TPM policies that are resistant to changes in ordering and permit fine-grained definition of acceptable values, and which can cleanly separate out local policy, generated policy values and values that are provided by the firmware. The split between machine-specific policy and OS policy allows for the static machine-specific policy to be merged with OS-provided policy, making remote attestation viable even over automated system upgrades.

We've integrated an implementation of this kind of policy into the TPM support code we'd like to integrate into Kubernetes, and CoreOS will soon be generating known-good hashes at image build time. The combination of these means that people using Distributed Trusted Computing under Tectonic will be able to validate the state of their systems with nothing more than a minimal machine-specific policy description.

The support code for all of this should also start making it into other distributions in the near future (the grub code is already in Fedora 24), so with luck we can define a cross-distribution policy format and make it straightforward to handle this in a consistent way even in heterogeneous operating system environments. Remote attestation is a powerful tool for ensuring that your systems are in a valid state, but the difficulty of policy management has been a significant factor in making it difficult for people to deploy in their data centres. Making it easier for people to shield themselves against low-level boot attacks is a big step forward in improving the security of distributed workloads and makes bare-metal hosting a much more viable proposition.


18 December 2015

Martin Pitt: What s new in autopkgtest: LXD, MaaS, apt pinning, and more

The last two major autopkgtest releases (3.18 from November, and 3.19 fresh from yesterday) bring some new features that are worth spreading. New LXD virtualization backend 3.19 debuts the new adt-virt-lxd virtualization backend. In case you missed it, LXD is an API/CLI layer on top of LXC which introduces proper image management, lets you seamlessly use images and containers on remote locations while intelligently caching them locally, automatically configures performant storage backends like zfs or btrfs, and just generally feels really clean and much simpler to use than classic LXC. Setting it up is not complicated at all. Install the lxd package (possibly from the backports PPA if you are on 14.04 LTS), and add your user to the lxd group. Then you can add the standard LXD image server with
  lxc remote add lco https://images.linuxcontainers.org:8443
and use the image to run e. g. the libpng test from the archive:
  adt-run libpng --- lxd lco:ubuntu/trusty/i386
  adt-run libpng --- lxd lco:debian/sid/amd64
The adt-virt-lxd.1 manpage explains this in more detail, also how to use this to run tests in a container on a remote host (how cool is that!), and how to build local images with the usual autopkgtest customizations/optimizations using adt-build-lxd. I have btrfs running on my laptop, and LXD/autopkgtest automatically use that, so the performance really rocks. Kudos to Stéphane, Serge, Tycho, and the other LXD authors! The motivation for writing this was to make it possible to move our armhf testing into the cloud (which for $REASONS requires remote containers), but I now have a feeling that soon this will completely replace the existing adt-virt-lxc virt backend, as it's much nicer to use. It is covered by the same regression tests as the LXC runner, and from the perspective of package tests that you run in it it should behave very similarly to LXC. The one problem I'm aware of is that autopkgtest-reboot-prepare is broken, but hardly anything is using that yet. This is a bit complicated to fix, but I expect it will be in the next few weeks. MaaS setup script While most tests are not particularly sensitive about which kind of hardware/platform they run on, low-level software like the Linux kernel, GL libraries, X.org drivers, or Mir very much are. There is a plan for extending our automatic tests to real hardware for these packages, and being able to run autopkgtests on real iron is one important piece of that puzzle. MaaS (Metal as a Service) provides just that: it manages a set of machines and provides an API for installing, talking to, and releasing them. The new "maas" autopkgtest ssh setup script (for the adt-virt-ssh backend) brings together autopkgtest and real hardware. Once you have a MaaS setup, get your API key from the web UI, then you can run a test like this:
  adt-run libpng --- ssh -s maas -- \
     --acquire "arch=amd64 tags=touchscreen" -r wily \
     http://my.maas.server/MAAS 123DEADBEEF:APIkey
The required arguments are the MaaS URL and the API key. Without any further options you will get any available machine installed with the default release. But usually you want to select a particular one by architecture and/or tags, and install a particular distro release, which you can do with the -r/--release and --acquire options. Note that this is not wired into Ubuntu's production CI environment, but it will be. Selectively using packages from -proposed Up until a few weeks ago, autopkgtest runs in the CI environment were always seeing/using the entirety of -proposed. This often led to lockups where an application "foo" and one of its dependencies "libbar" got a new version in -proposed at the same time, and on test regressions it was not clear at all whose fault it was. This often led to perfectly good packages being stuck in -proposed for a long time, and a lot of manual investigation about root causes. These days we are using a more fine-grained approach: A test run is now specific to a "trigger", that is, the new package in -proposed (e. g. a new version of libbar) that caused the test (e. g. for "foo") to run. autopkgtest sets up apt pinning so that only the binary packages for the trigger come from -proposed, the rest from -release. This provides much better isolation between the mush of often hundreds of packages that get synced or uploaded every day. This new behaviour is controlled by an extension of the --apt-pocket option. So you can say
  adt-run --apt-pocket=proposed=src:foo,libbar1,libbar-data ...
and then only the binaries from the "foo" source, libbar1, and libbar-data will come from -proposed, everything else from -release. Caveat: Unfortunately apt's pinning is rather limited. As soon as any of the explicitly listed packages depends on a package or version that is only available in -proposed, apt falls over and refuses the installation instead of taking the required dependencies from -proposed as well. In that case, adt-run falls back to the previous behaviour of using no pinning at all. (This unfortunately got worse with apt 1.1, bug report to be done). But it's still helpful in many cases that don't involve library transitions or other package sets that need to land in lockstep. Unified testbed setup script There are a number of changes that need to be made to testbeds so that tests can run with maximum performance (like running dpkg through eatmydata, disabling apt translations, or automatically using the host's apt-cacher-ng), reliable apt sources, and in a minimal environment (to detect missing dependencies and avoid interference from unrelated services; these days the standard cloud images have a lot of unnecessary fat). There is also a choice whether to apply these only once (every day) to an autopkgtest-specific base image, or on the fly to the current ephemeral testbed for every test run (via --setup-commands). Over time this led to quite a lot of code duplication between adt-setup-vm, adt-build-lxc, the new adt-build-lxd, cloud-vm-setup, and create-nova-image-new-release. I now cleaned this up, and there is now just a single setup-commands/setup-testbed script which works for all kinds of testbeds (LXC, LXD, QEMU images, cloud instances) and both for preparing an image with adt-buildvm-ubuntu-cloud, adt-build-lx[cd] or nova, and with preparing just the current ephemeral testbed via --setup-commands. While this is mostly an internal refactorization, it does impact users who previously used the adt-setup-vm script for e. g. building Debian images with vmdebootstrap. This script is now gone, and the generic setup-testbed entirely replaces it. Misc Aside from the above, every new version has a handful of bug fixes and minor improvements, see the git log for details. As always, if you are interested in helping out or contributing a new feature, don't hesitate to contact me or file a bug report.

2 December 2015

Andrea Veri: Three years and counting

It's been a while since my last "what's been happening behind the scenes" e-mail, so I'm here to report on what has been happening within the GNOME Infrastructure, its future plans, and my personal impressions of a challenge that started around three (3) years ago, when Sriram Ramkrishna and Jeff Schroeder proposed my name as a possible candidate for coordinating the team that runs the systems behind the GNOME Project, followed by the official hiring completed by Karen Sandler back in February 2013.

The GNOME Infrastructure has finally reached stability both in terms of reliability and uptime: we didn't have any service disruption this year or the past year, and services have been running as smoothly as expected for a project like the one we are managing. As many of you know, service disruptions and a total lack of maintenance were very common before I joined back in 2013. I'm so glad the situation has dramatically changed and that developers, users and enthusiasts are now able to reach our websites, code repositories and build machines without experiencing slowness, downtime or unreachability. Additionally, all these groups of people now have a reference point they can contact in case they need help coping with the infrastructure they use daily. The ticketing system allows users to get in touch with the members of the Sysadmin Team and receive support within a very short period of time (also thanks to PagerDuty, a service the Foundation is kindly sponsoring).

Before moving on to the future plans, I'd like to provide a summary of what has been done during these roughly three years, so you can get an idea of why I call the changes that happened to the infrastructure a complete revamp:
  1. Recycled several ancient machines, migrating services off of them and consolidating their configuration onto our central configuration management platform run by Puppet. This includes a grand total of 7 machines that were replaced by new hardware and extended warranties the Foundation kindly sponsored.
  2. We strengthened our websites' security by introducing SSL certificates everywhere and recently replacing them with SHA-2 certificates.
  3. We introduced several services such as ownCloud, the Commits Bot, the Pastebin, the Etherpad, Jabber, and the GNOME GitHub mirror.
  4. We restructured the way we back up our machines, also thanks to the Fedora Project sponsoring the disk space on their backup facility. The way we handle backups has changed drastically, from the early years where a magnetic tape facility carried all the burden of archiving our data, to today, where a NetApp is used together with rdiff-backup.
  5. We upgraded Bugzilla to the latest release; a huge thanks goes to Krzesimir Nowak, who kindly helped us build the migration tools.
  6. We introduced the GNOME Apprentice program, open-sourcing our internal Puppet repository and cleansing it (shallow clones FTW!) of any sensitive information, which now lives in a different repository with restricted access.
  7. We retired Mango and our OpenLDAP instance in favor of FreeIPA which allows users to modify their account information on their own without waiting for the Accounts Team to process the change.
  8. We documented how our internal tools are customized to play together, making it easy for future Sysadmin Team members to learn how the infrastructure works and to take over from existing members in case they aren't able to keep up their position anymore.
  9. We started providing hosting to the GIMP and GTK projects, which now completely rely on the GNOME Infrastructure (DNS, email, websites and other services).
  10. We started providing hosting not only to the GIMP and GTK projects but also to localized communities such as GNOME Hispano and GNOME Greece.
  11. We configured proper monitoring for all the hosted services thanks to Nagios and Check-MK.
  12. We migrated the IRC network to a newer ircd with proper IRC services (Nickserv, Chanserv) in place.
  13. We made sure each machine had a configured management (mgmt) and KVM interface for direct remote access to the bare metal machine itself, its hardware status and all the operations related to it (hard reset, reboot, shutdown, etc.).
  14. We upgraded MoinMoin to the latest release and made a substantial cleanup of old accounts, pages marked as spam and trashed pages.
  15. We deployed DNSSEC for several domains we manage, including gnome.org, guadec.es, gnomehispano.es, guadec.org, gtk.org and gimp.org.
  16. We introduced an account deactivation policy that comes into play when a script catches a contributor who has not committed to any of the hosted repositories at git.gnome.org for two years. The account in question is marked as inactive and its gnomecvs (from the old CVS days) and ftpadmin groups are removed. (A rough sketch of such a check follows after this list.)
  17. We scheduled mass reboots of all the machines roughly every month to properly apply security and kernel updates.
  18. We introduced MirrorBrain (MB), the mirroring service that serves GNOME and related modules' tarballs and software all over the world. Before introducing MB, GNOME had several mirrors located on all the main continents but, at the same time, very few users actually making use of them. Many organizations and companies behind these mirrors decided not to host GNOME sources anymore, as the usage statistics were very poor, and preferred providing the same service to projects that really had a demand for those resources. MB solved all this by redirecting users to the closest mirror (through mod_geoip) and by making sure the sources' checksums matched across all the mirrors and against the original tarball uploaded by a GNOME maintainer and hosted at master.gnome.org.
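
As a side note on item 16, here is a minimal sketch of what such an inactivity check could look like. This is purely illustrative and not the actual GNOME script: the repository path, the two-year cutoff handling and the follow-up actions are assumptions.

  #!/bin/sh
  # Illustrative sketch only: report an account with no commits to any hosted
  # repository in the last two years, so it can be marked inactive and dropped
  # from the gnomecvs and ftpadmin groups.
  user="$1"
  cutoff=$(date -d '2 years ago' +%s)
  last=0
  for repo in /git/*.git; do
      ts=$(git --git-dir="$repo" log -1 --format=%ct --author="$user" 2>/dev/null)
      [ -n "$ts" ] && [ "$ts" -gt "$last" ] && last="$ts"
  done
  if [ "$last" -lt "$cutoff" ]; then
      echo "$user: no commits in the last two years, mark inactive"
  fi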
I could keep the list going with dozens of other accomplished tasks, but I'm sure many of you are more interested in what the future plans actually are, in terms of where the GNOME Infrastructure should be in the next couple of years.

One of the main topics we've been discussing is migrating our Git infrastructure away from cgit (which mainly serves as a code-browsing tool) to a more complete platform that will surely include a code review tool of some sort (Gerrit, GitLab, Phabricator). Another topic is migrating our mailing lists to Mailman 3 / HyperKitty. This also means we definitely need a staging infrastructure in place for testing these kinds of transitions, ideally bound to a separate Puppet / Ansible repository or branch. Having a separate repository for testing purposes will also help apprentices test their changes directly on a live system rather than on their personal computer, which might be running a different OS / set of tools than the ones we run on the GNOME Infrastructure. Another goal is to see GNOME Accounts become the only authentication resource in use within the whole GNOME Infrastructure; that means one should be able to log in to a specific service with the same username / password in use on the other hosted services. That has been on my todo list for a while already, and it's probably time to push it forward together with Patrick Uiterwijk, responsible for Ipsilon's development at Red Hat and a GNOME Sysadmin.

While these are the top-priority items, we are also soon receiving new hardware (plus extended warranty renewals for two out of the three machines that had their warranty renewed a while back), and migrating some of the VMs off the current set of machines to the new boxes is definitely another task I'd like to look at in the next couple of months (one machine, ns-master.gnome.org, is being decommissioned, giving me a chance to migrate away from BIND to NSD).

The GNOME Infrastructure is evolving, and it's crucial to have someone maintaining it. On this note I'm bringing to your attention the fact that the assigned Sysadmin funds are running out, as reported in the Board minutes from the 27th of October. On that front, Jeff Fortin started looking for possible sponsors and came up with the idea of making a brochure with a set of accomplished tasks that wouldn't have been possible without the success of the Sysadmin fundraising campaign launched by Stormy Peters back in June 2010. The Board is well aware of the importance of having someone looking after the infrastructure that runs the GNOME Project and is making sure the brochure will be properly reviewed and published.

And now some stats taken from the Puppet Git repository:
$ cd /git/GNOME/puppet && git shortlog -ns
3520 Andrea Veri
506 Olav Vitters
338 Owen W. Taylor
239 Patrick Uiterwijk
112 Jeff Schroeder
71 Christer Edwards
4 Daniel Mustieles
4 Matanya Moses
3 Tobias Mueller
2 John Carr
2 Ray Wang
1 Daniel Mustieles García
1 Peter Baumgarten
and from the Request Tracker database (52388 being my assigned ID):
mysql> select count(*) from Tickets where LastUpdatedBy = '52388';
+----------+
| count(*) |
+----------+
|     3613 |
+----------+
1 row in set (0.01 sec)
mysql> select count(*) from Tickets where LastUpdatedBy = '52388' and Status = 'Resolved';
+----------+
| count(*) |
+----------+
|     1596 |
+----------+
1 row in set (0.03 sec)
It's been a long run, and one that has made me proud: for the things I learnt, for the tasks I've been able to accomplish, for the great support the GNOME community gave me all the time, and most of all for the very fact of being part of the team responsible for the systems hosting the GNOME Project. Thank you, GNOME community, for your continued and never-ending backing; we work daily to improve how the services we host are delivered to you, and the support we receive back is fundamental for keeping our passion and enthusiasm high!
